Freedom, Hierarchies and Saturated Regression (WIP)

probability
multilevel models
hierarchical models
differences in differences
Published

July 1, 2024

import pandas as pd
import numpy as np
import bambi as bmb
import pymc as pm
import arviz as az
import matplotlib.pyplot as plt
from matplotlib import transforms
from itertools import product
import pyfixest as pf
import statsmodels.formula.api as smf
from patsy import dmatrices, dmatrix

np.random.seed(100)
Summary

WORK IN PROGRESS

Inference, causal and otherwise, is biased in the presence of implicit group-level confounds. Regression modelling can account for these risks, but we have to appropriately condition our inference on the right group-level effects, and it is not always obvious how to incorporate these factors into our modelling. Fixed effects and (hierarchical) random effects are variously used to express variation in the outcome due to group-level effects. Questions become more complicated again when we have to account for interactions across multiple groups.

Two Way Fixed Effects (TWFE) regression models are often used in Difference-in-Differences causal inference designs to estimate treatment effects while accounting for variation due to group effects in two dimensions. We will explore the relationship between TWFE and Mundlak-like regressions in DiD and event study designs, showing how sound inferential strategies rely on saturated models with heavily parameterised specifications. In particular we’ll see how TWFE breaks down in cases of staggered treatment regimes, and how to fix it.

Architectures and Free Parameters

“There are no rules of architecture for a castle in the clouds.” - G.K. Chesterton

Freedom is not whimsy. You act and learn within constraints, struggling to discern the shape of the tracks on which you travel. Hurtling into the future, you pivot at junctions, hoping for the best. You may imagine your next choice will be the one where everything falls into place. All the plans are arrayed, but outcomes are confounded by a plethora of interaction effects and understanding only arrives incrementally. You may seek to learn from the past with a cursory review of history, but detail matters because it structures the implications. It is these structural details of group membership and hierarchies that determine the path of the tracks, and the architecture that holds up your future.

In statistical modelling we seek to learn from the past to discern the shape of the processes which determine the future. Different models yield different insights into those structures which govern our trajectories. Which parameters are “free”, and which are decisively characterised? Which realities are contestable and which non-negotiable? In this post we will dive into the way in which different inferential methodologies yield different insights into how group-level confounding constrains the realisations of our parameter space and shapes the architecture of causal inference.

Estimation and Group Effects

First, consider an example due to Richard McElreath’s lecture series, where we examine the various parameter-recovery options available in the case of group-level confounding. We define a data generating process determined by group-level effects:

def inv_logit(p):
    return np.exp(p) / (1 + np.exp(p))

N_groups = 30
N_id = 3000
a0 = -2     # global intercept
bXY = 1     # true effect of X on Y
bZY = -0.5  # true effect of the group-level covariate Z on Y
g = np.random.choice(range(N_groups), size=N_id, replace=True)  # group membership
Ug = np.random.normal(1.5, size=N_groups)  # unobserved group-level confound

X = np.random.normal(Ug[g], size=N_id)  # individual covariate shifted by the group confound
Z = np.random.normal(size=N_groups)     # group-level covariate

s = a0 + bXY*X + Ug[g] + bZY*Z[g]  # linear predictor on the logit scale
p = inv_logit(s)
Y = np.random.binomial(n=1, p=p)


sim_df = pd.DataFrame({'Y': Y, 'p': p, 's': s, 'g': g, 'X': X, 'Z': Z[g]})
sim_df
Y p s g X Z
0 1 0.943243 2.810542 8 1.384072 -1.818302
1 0 0.593872 0.379996 24 1.047128 -0.242817
2 1 0.874945 1.945406 3 2.340418 -0.650122
3 0 0.317089 -0.767181 7 0.550504 0.795680
4 1 0.699030 0.842684 23 1.183011 -0.111692
... ... ... ... ... ... ...
2995 1 0.996046 5.529008 8 4.102537 -1.818302
2996 0 0.339112 -0.667253 3 -0.272241 -0.650122
2997 1 0.662931 0.676383 22 2.395946 0.817874
2998 1 0.846683 1.708821 15 1.568391 1.026179
2999 1 0.995504 5.399987 8 3.973517 -1.818302

3000 rows × 6 columns

This is a Bernoulli outcome with group-level confounds. If we model this relationship, the confounding effects will bias naive parameter estimates of the covariates \(X\), \(Z\), and we will see different results as we explore different ways of parameterising the relationships.
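In equation form, the data generating process above is

\[s_i = \alpha_0 + \beta_{XY}X_i + U_{g[i]} + \beta_{ZY}Z_{g[i]}, \quad Y_i \sim \text{Bernoulli}(\text{logit}^{-1}(s_i))\]

where \(X_i \sim Normal(U_{g[i]}, 1)\), so the unobserved group effect \(U_{g}\) drives both the covariate \(X\) and the outcome \(Y\): a classic confounding structure.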

Warning

There is a huge degree of confusion over the meaning of the terms “Fixed Effects” and “Random Effects”. Within this blog post, when we refer to fixed effects we will mean the population-level parameter in \[\beta X\]

for an individual variable \(X\). Correspondingly, when we refer to random effects we will mean group-level parameters \(\beta_{g}\)

\[\Big(\underbrace{\beta}_{pop} + \underbrace{\beta_{g}}_{group}\Big)X\]

which are incorporated into our model equation by modifying the population-level parameters. We will generally use Wilkinson notation to specify these choices, where random effects modifying a population parameter are denoted with a conditional bar over the variable ( X | group) and fixed effects are specified by just including the variable in the equation, i.e. y ~ X + (Z | group), where X has a fixed-effect parameterisation and Z a random-effects parameterisation. We can also create indicator variables for group membership using this syntax with y ~ C(group) + X + Z, where under the hood we pivot the group category into zero-one variables indicating group membership. This parameterisation means the indicator variable for each level of the grouping variable receives its own fixed-effect population parameter.
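To make the indicator-variable syntax concrete, here is a minimal sketch, on a purely illustrative toy dataframe, of how patsy pivots a grouping column into zero-one columns; the (Z | group) random-effects syntax, by contrast, is handled by bambi at model-build time rather than in the design matrix.

# Toy data purely for illustration of the C() encoding
toy = pd.DataFrame({'group': ['a', 'b', 'c', 'a'], 'X': [1.0, 2.0, 3.0, 4.0]})

# One zero-one column per non-reference level of group, plus X as-is
print(dmatrix("C(group) + X", toy))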

Naive Model

naive_model = bmb.Model(f"Y['>.5'] ~ X + Z ", sim_df, 
family="bernoulli")
naive_idata = naive_model.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True},)
az.summary(naive_idata, var_names=['Intercept', 'X', 'Z'])
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept -0.993 0.076 -1.134 -0.846 0.001 0.001 4771.0 2856.0 1.0
X 1.310 0.053 1.216 1.412 0.001 0.001 2814.0 2987.0 1.0
Z -0.614 0.049 -0.706 -0.518 0.001 0.001 3658.0 2985.0 1.0

Here we see that all three parameter estimates are biased away from their true values. Let’s try a simple fixed effects approach that adds indicator variables for all but one of the group levels.

Fixed Effects Model

fixed_effects_model = bmb.Model(f"Y['>.5'] ~ C(g) + X + Z", sim_df, 
family="bernoulli")
fixed_effects_idata = fixed_effects_model.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True},)
az.summary(fixed_effects_idata, var_names=['Intercept', 'X', 'Z'])
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept -0.185 1.478 -2.983 2.539 0.071 0.050 433.0 549.0 1.01
X 0.980 0.061 0.855 1.084 0.001 0.001 2766.0 2389.0 1.00
Z -0.199 1.381 -2.892 2.346 0.066 0.047 437.0 601.0 1.01

Now we see that the intercept term and the coefficient on the \(X\) variable seem correct, but the coefficient on \(Z\) is wildly wrong. Indeed, the uncertainty interval on the \(Z\) coefficient is huge: the fixed effects model was unable to learn anything about the correct parameter. The naive model, by contrast, lands closer to the correct \(Z\) parameter but overestimates the \(X\) coefficient.

fig, axs = plt.subplots(2, 1, figsize=(10, 12))
axs = axs.flatten()

az.plot_posterior(naive_idata, var_names=['X'], ax=axs[0], 
point_estimate=None, color='red', label='Naive Model')
axs[0].axvline(1)

az.plot_posterior(fixed_effects_idata , var_names=['X'], ax=axs[0], point_estimate=None, hdi_prob='hide', label='Fixed Effect Models')
axs[0].set_title("Naive/Fixed Model X Coefficient")

az.plot_posterior(naive_idata, var_names=['Z'], ax=axs[1], point_estimate=None,  color='red', ref_val_color='black')


az.plot_posterior(fixed_effects_idata , var_names=['Z'], ax=axs[1], point_estimate=None, hdi_prob='hide')

axs[1].set_title("Naive/Fixed Effect Model Z Coefficient")
axs[1].axvline(-0.5);

We now want to try another approach to handle the group confounding: a hierarchical approach that adds group-level effects to the intercept term.

Multilevel Model

multilevel_model = bmb.Model(f"Y['>.5'] ~ (1 | g) + X + Z", sim_df, 
family="bernoulli")
multilevel_model_idata = multilevel_model.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True},)
az.summary(multilevel_model_idata)
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept -0.571 0.202 -0.956 -0.197 0.008 0.006 657.0 1153.0 1.00
X 1.024 0.060 0.915 1.143 0.001 0.001 3472.0 2642.0 1.00
Z -0.602 0.167 -0.909 -0.258 0.006 0.005 694.0 911.0 1.01
1|g_sigma 0.934 0.168 0.630 1.246 0.006 0.004 841.0 1413.0 1.00
1|g[0] 0.703 0.326 0.098 1.322 0.010 0.007 1167.0 2120.0 1.00
1|g[1] 0.077 0.296 -0.510 0.608 0.008 0.006 1237.0 2059.0 1.00
1|g[2] 0.840 0.350 0.145 1.452 0.009 0.006 1594.0 2070.0 1.00
1|g[3] -0.291 0.318 -0.842 0.332 0.010 0.007 1004.0 2011.0 1.00
1|g[4] -1.468 0.444 -2.258 -0.607 0.010 0.007 2098.0 2641.0 1.00
1|g[5] 0.460 0.324 -0.129 1.090 0.008 0.006 1621.0 2483.0 1.00
1|g[6] -1.322 0.519 -2.265 -0.324 0.012 0.009 1831.0 2564.0 1.00
1|g[7] -0.212 0.288 -0.726 0.347 0.009 0.006 1129.0 2192.0 1.00
1|g[8] 0.152 0.516 -0.774 1.172 0.014 0.010 1290.0 2241.0 1.00
1|g[9] 0.039 0.548 -0.959 1.107 0.019 0.014 792.0 1479.0 1.01
1|g[10] -0.274 0.388 -0.999 0.464 0.013 0.009 904.0 1586.0 1.01
1|g[11] 0.501 0.308 -0.096 1.053 0.008 0.006 1505.0 2598.0 1.00
1|g[12] 0.318 0.384 -0.412 1.033 0.012 0.008 1071.0 1786.0 1.00
1|g[13] -0.268 0.332 -0.859 0.364 0.011 0.007 993.0 1882.0 1.00
1|g[14] 1.578 0.595 0.505 2.717 0.013 0.009 2234.0 2599.0 1.00
1|g[15] 1.043 0.385 0.313 1.758 0.010 0.007 1380.0 2794.0 1.00
1|g[16] 1.763 0.463 0.936 2.667 0.010 0.007 1989.0 2844.0 1.00
1|g[17] -0.333 0.283 -0.858 0.202 0.008 0.006 1218.0 1825.0 1.00
1|g[18] -0.103 0.517 -1.107 0.858 0.017 0.012 974.0 1417.0 1.00
1|g[19] -1.440 0.371 -2.132 -0.743 0.010 0.007 1504.0 2471.0 1.00
1|g[20] -0.107 0.359 -0.755 0.581 0.010 0.007 1230.0 1881.0 1.00
1|g[21] -0.739 0.307 -1.274 -0.126 0.008 0.005 1574.0 2321.0 1.00
1|g[22] -0.803 0.288 -1.352 -0.260 0.008 0.006 1148.0 2239.0 1.00
1|g[23] 0.561 0.322 -0.003 1.205 0.008 0.006 1462.0 2395.0 1.00
1|g[24] -0.077 0.304 -0.602 0.522 0.009 0.006 1129.0 2171.0 1.00
1|g[25] -0.927 0.294 -1.479 -0.389 0.008 0.006 1315.0 2207.0 1.00
1|g[26] 0.659 0.427 -0.195 1.392 0.012 0.009 1257.0 2191.0 1.00
1|g[27] -0.391 0.298 -0.945 0.168 0.008 0.006 1262.0 2432.0 1.00
1|g[28] 1.062 0.450 0.185 1.866 0.008 0.006 2910.0 2972.0 1.00
1|g[29] -0.868 0.290 -1.412 -0.340 0.008 0.006 1258.0 2147.0 1.00

This now starts to recover the \(X\) coefficient properly, but the intercept is biased and the \(Z\) coefficient leaves something to be desired. Next we’ll apply the Mundlak device, which adds the group mean of the covariate back to each observation as a covariate.

Mundlak Model

sim_df['group_mean'] = sim_df.groupby('g')['X'].transform('mean')

sim_df['group_mean_Z'] = sim_df.groupby('g')['Z'].transform('mean')

mundlak_model = bmb.Model(f"Y['>.5'] ~ (1 | g) + X + Z + group_mean", sim_df, 
family="bernoulli")
mundlak_idata = mundlak_model.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True},)
az.summary(mundlak_idata, var_names=['Intercept', 'X', 'Z'])
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept -1.908 0.141 -2.173 -1.642 0.002 0.002 4237.0 2898.0 1.0
X 0.952 0.058 0.848 1.070 0.001 0.001 5188.0 3017.0 1.0
Z -0.445 0.063 -0.568 -0.329 0.001 0.001 3676.0 2971.0 1.0

We can plot all the parameter recovery models together and we’ll see that there are some trade-offs between the fixed effects and random effects varieties of the modelling.

Plotting the Comparisons

fig, axs = plt.subplots(3, 1, figsize=(10, 12))
axs = axs.flatten()

az.plot_posterior(naive_idata, var_names=['X'], ax=axs[0], 
point_estimate=None, color='red', label='Naive')
axs[0].axvline(1, color='k', linestyle='--', label='True value')

az.plot_posterior(fixed_effects_idata , var_names=['X'], ax=axs[0], point_estimate=None, hdi_prob='hide', label='Fixed')

az.plot_posterior(multilevel_model_idata, var_names=['X'], ax=axs[0], point_estimate=None, hdi_prob='hide', color='green', label='Hierarchical')

az.plot_posterior(mundlak_idata, var_names=['X'], ax=axs[0], point_estimate=None, hdi_prob='hide', color='purple', label='Mundlak')


axs[0].set_title("X Coefficient")

az.plot_posterior(naive_idata, var_names=['Z'], ax=axs[1], point_estimate=None,  color='red', ref_val_color='black')


az.plot_posterior(fixed_effects_idata , var_names=['Z'], ax=axs[1], point_estimate=None, hdi_prob='hide')

az.plot_posterior(multilevel_model_idata, var_names=['Z'], ax=axs[1], point_estimate=None, hdi_prob='hide', color='green')

az.plot_posterior(mundlak_idata, var_names=['Z'], ax=axs[1], point_estimate=None, hdi_prob='hide', color='purple')

axs[1].set_title("Z Coefficient")
axs[1].axvline(-0.5, color='k', linestyle='--');

az.plot_posterior(naive_idata, var_names=['Intercept'], ax=axs[2], point_estimate=None,  color='red', ref_val_color='black')


az.plot_posterior(fixed_effects_idata , var_names=['Intercept'], ax=axs[2], point_estimate=None, hdi_prob='hide')

az.plot_posterior(multilevel_model_idata, var_names=['Intercept'], ax=axs[2], point_estimate=None, hdi_prob='hide', color='green')

az.plot_posterior(mundlak_idata, var_names=['Intercept'], ax=axs[2], point_estimate=None, hdi_prob='hide', color='purple')

axs[2].axvline(-2, color='k', linestyle='--');

Importantly, the fixed effects model is focused on recovering the treatment effect on the \(X\) covariate, somewhat at the expense of accuracy on the other systematic components of the model. This focus renders the model less predictively accurate: comparing the models on their cross-validation scores, we see that the hierarchical Mundlak model is to be preferred.

compare_df = az.compare({'naive': naive_idata, 'fixed': fixed_effects_idata, 'hierarchical': multilevel_model_idata, 
'mundlak': mundlak_idata})
compare_df
rank elpd_loo p_loo elpd_diff weight se dse warning scale
mundlak 0 -1209.408708 12.364828 0.000000 0.950732 30.229848 0.000000 False log
hierarchical 1 -1218.650711 27.842329 9.242003 0.000000 30.408096 4.428523 False log
fixed 2 -1220.981594 33.341152 11.572886 0.022735 31.690956 5.045867 True log
naive 3 -1295.483190 2.963360 86.074482 0.026533 29.508873 12.992456 False log
az.plot_compare(compare_df);

Full Luxury Bayesian Mundlak Machine

As good Bayesians we might worry about the false precision of adding simple point estimates for the group-mean covariates in the Mundlak model. We can remedy this by explicitly incorporating these values as an extra parameter and adding uncertainty to the draws on these parameters.

id_indx, unique_ids = pd.factorize(sim_df["g"])

coords = {'ids': list(range(N_groups))}
with pm.Model(coords=coords) as model: 

    x_data = pm.Data('X_data', sim_df['X'])
    z_data = pm.Data('Z_data', sim_df['Z'])
    y_data = pm.Data('Y_data', sim_df['Y'])

    alpha0 = pm.Normal('Intercept', 0, 1)
    alpha_j = pm.Normal('alpha_j', 0, 1, dims='ids')
    beta_xy = pm.Normal('X', 0, 1)
    beta_zy = pm.Normal('Z', 0, 1)

    # treat the observed group means as noisy measurements with scale 0.1
    group_means = pm.Normal('group_means', sim_df.groupby('g')['X'].mean().values, .1, dims='ids')

    mu = pm.Deterministic('mu', (alpha0 + alpha_j[id_indx]) + beta_xy*x_data + beta_zy*z_data + group_means[id_indx])
    p = pm.Deterministic("p", pm.math.invlogit(mu))
    # likelihood
    pm.Binomial("y", n=1, p=p, observed=y_data)

    idata = pm.sample(idata_kwargs={"log_likelihood": True})
az.summary(idata, var_names=['Intercept', 'X', 'Z'])
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept -1.886 0.201 -2.273 -1.525 0.006 0.005 978.0 1953.0 1.0
X 1.006 0.058 0.900 1.115 0.001 0.001 6486.0 2869.0 1.0
Z -0.577 0.175 -0.903 -0.245 0.005 0.003 1320.0 1981.0 1.0

This model bakes more uncertainty into the process, assuming a kind of measurement-error model, which may be more or less apt depending on how much data you’ve acquired and your view of the underlying process. We’ll now examine how these considerations play out when there are multiple group-level influences.

Nested Groups and Fixed Effects

We’ve seen how various attempts to account for the group effects can more or less recover the parameters of a complex data generating process with group confounding. Now we want to look at a case where we can have interacting group effects at multiple levels.

Pupils within Class Rooms within Schools

A natural three-level group hierarchy occurs in the context of educational organisations and business org-charts. We can use this fact to briefly interrogate how inferential statements about treatment effects vary as a function of what, and how, we control for group-level variation. We draw the following data set from Linear Mixed Models: A Practical Guide Using Statistical Software.

df = pd.read_csv('classroom.csv')
df['class_mean'] = df.groupby(['classid'])['mathprep'].transform('mean')
df['school_mean'] = df.groupby(['schoolid'])['mathprep'].transform('mean')
df.head()
sex minority mathkind mathgain ses yearstea mathknow housepov mathprep classid schoolid childid class_mean school_mean
0 1 1 448 32 0.46 1.0 NaN 0.082 2.00 160 1 1 2.00 2.909091
1 0 1 460 109 -0.27 1.0 NaN 0.082 2.00 160 1 2 2.00 2.909091
2 1 1 511 56 -0.03 1.0 NaN 0.082 2.00 160 1 3 2.00 2.909091
3 0 1 449 83 -0.38 2.0 -0.11 0.082 3.25 217 1 4 3.25 2.909091
4 0 1 425 53 -0.03 2.0 -0.11 0.082 3.25 217 1 5 3.25 2.909091

The data has three distinct levels: (1) the child or pupil, with their demographic attributes and the outcome variable mathgain; (2) the classroom, with teacher-level attributes such as experience yearstea and a record of the mathematics courses taken mathprep; (3) the school and neighbourhood level, with features describing poverty measures in the vicinity housepov.
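Since the model specifications below lean on this nesting, it is worth a quick sanity check that classrooms really are strictly nested within schools, along these lines:

# If classes are strictly nested, each classid should map to exactly one schoolid
print(df.groupby('classid')['schoolid'].nunique().max())  # 1 under strict nesting
print(df['schoolid'].nunique(), df['classid'].nunique(), len(df))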

We’ll plot the child’s outcome mathgain against mathprep and distinguish the patterns by school.

def rand_jitter(arr):
    stdev = .01 * (max(arr) - min(arr))
    return arr + np.random.randn(len(arr)) * stdev


schools = df['schoolid'].unique()
schools_10 = [schools[i:i+10] for i in range(0, len(schools), 10)]
fig, axs = plt.subplots(3,4, figsize=(20, 10), 
sharey=True, sharex=True)
axs = axs.flatten()
for s, ax in zip(schools_10, axs):
    temp = df[df['schoolid'].isin(s)]
    ax.scatter(rand_jitter(temp['mathprep']), temp['mathgain'], 
    c=temp['schoolid'], cmap='tab10')
    ax.set_title(f"Schools \n {s}");

There is a small number of observed students per school, so the individual school-level distributions show some extreme outliers, but the overall distribution nicely converges to an approximately normal, symmetric shape.

fig, axs = plt.subplots(1, 2, figsize=(10, 6))
axs = axs.flatten()
for school in schools:
    temp = df[df['schoolid'] ==school]
    axs[0].hist(temp['mathgain'], color='grey', alpha=0.3, density=True, histtype='step', cumulative=True)

axs[0].hist(df['mathgain'], bins=30, ec='black', density=True, cumulative=True, histtype='step')
axs[1].hist(df['mathgain'], bins=30, ec='black', density=True, cumulative=False)
axs[0].set_title("Cumulative Distribution Function by School")
axs[1].set_title("Overall Distribution");

With these kinds of structures we need to be careful in how we evaluate any treatment effects when there are reasons to believe in group-level effects that impact the outcome variable. Consider the following ways in which we could model the outcome and treatment.

Interaction Effects and Computational Complexity

y, X = dmatrices("mathgain ~ mathprep + C(schoolid)+ C(classid)", df, return_type="dataframe")
print(X.shape)

y, X1 = dmatrices("mathgain ~ mathprep + C(schoolid)/C(classid)", df, return_type="dataframe")
print(X1.shape)


y, X2 = dmatrices("mathgain ~ mathprep + C(schoolid):C(childid)", df, return_type="dataframe")
print(X2.shape)
(1190, 419)
(1190, 33385)
(1190, 127331)

We see here how different ways of accounting for group-level variation and interaction effects lead to vastly inflated feature matrices. However, not all interaction terms matter; put another way, not all of the possible interactions feature in the data. So we have inflated the data matrix beyond necessity.

Here we define a helper function to parse a complex interaction formula, remove the columns entirely composed of zeros, and return a new formula and dataframe with a suitable range of features to capture the variation structures in the data.

def make_interactions_df(formula, df):
    y, X = dmatrices(formula, df, return_type="dataframe")
    n = X.shape[1]
    X = X[X.columns[~(np.abs(X) < 1e-12).all()]]
    n1 = X.shape[1]
    target_name = y.columns[0]
    d = pd.concat([y, X], axis=1)
    d.drop(['Intercept'], axis=1, inplace=True)
    d.columns = [c.replace('[', '').replace(']','').replace('C(', '').replace(')', '').replace('.', '_').replace(':', '_') for c in d.columns]
    cols = ' + '.join([col for col in d.columns if col != target_name])
    formula = f"{target_name} ~ {cols}"
    print(f"""Size of original interaction features: {n} \nSize of reduced feature set: {n1}""")
    return formula, d

formula, interaction_df = make_interactions_df("mathgain ~ mathprep + C(schoolid):C(childid)", df)

interaction_df.head()
Size of original interaction features: 127331 
Size of reduced feature set: 2370
mathgain childidT_2 childidT_3 childidT_4 childidT_5 childidT_6 childidT_7 childidT_8 childidT_9 childidT_10 ... schoolidT_107_childid1182 schoolidT_107_childid1183 schoolidT_107_childid1184 schoolidT_107_childid1185 schoolidT_107_childid1186 schoolidT_107_childid1187 schoolidT_107_childid1188 schoolidT_107_childid1189 schoolidT_107_childid1190 mathprep
0 32.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.00
1 109.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.00
2 56.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.00
3 83.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.25
4 53.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.25

5 rows × 2370 columns

We have reduced the number of interactions by more than an order of magnitude! We can now fit a regression model to the revised feature matrix.

Comparing Interaction Models

Consider the variation in the coefficient values estimated for mathprep as we add more and more interaction effects. Adding interaction effects generates a large number of completely zero interaction terms, which we remove here.

formulas = ["""mathgain ~ mathprep + C(schoolid)""",
""" mathgain ~ mathprep + school_mean*class_mean""" , 
"""mathgain ~ mathprep + C(schoolid) + C(classid)""", 
"""mathgain ~ mathprep + C(schoolid)*C(classid)""",
"""mathgain ~ mathprep + C(classid):C(childid)""", 
]

estimates_df = []
for f in formulas:
    formula, interaction_df = make_interactions_df(f, df)
    result = smf.ols(formula, interaction_df).fit()
    estimates = [[result.params['mathprep']], list(result.conf_int().loc['mathprep', :]), [formula]]
    estimates = [e for est in estimates for e in est]
    estimates_df.append(estimates)

estimates_df = pd.DataFrame(estimates_df, columns=['mathprep_estimate', 'lower bound', 'upper bound', 'formula'])

estimates_df
Size of original interaction features: 108 
Size of reduced feature set: 108
Size of original interaction features: 5 
Size of reduced feature set: 5
Size of original interaction features: 419 
Size of reduced feature set: 419
Size of original interaction features: 33385 
Size of reduced feature set: 728
Size of original interaction features: 371281 
Size of reduced feature set: 2376
mathprep_estimate lower bound upper bound formula
0 1.060768 -1.435118 3.556655 mathgain ~ schoolidT_2 + schoolidT_3 + schooli...
1 1.789413 -1.587051 5.165876 mathgain ~ mathprep + school_mean + class_mean...
2 2.948546 0.601150 5.295942 mathgain ~ schoolidT_2 + schoolidT_3 + schooli...
3 3.545931 1.359793 5.732068 mathgain ~ schoolidT_2 + schoolidT_3 + schooli...
4 2.303187 NaN NaN mathgain ~ childidT_2 + childidT_3 + childidT_...

The point here (perhaps obvious) is that the estimate of treatment effects due to some policy or programme can be understood differently when the regression model is able to account for increasing aspects of individual variation. Choices about the right way to “saturate” your regression specification are at the heart of causal inference. We will consider a number of specifications below that incorporate these group effects in a hierarchical model which nests the effect of class membership within school membership. This choice allows us to control for group-specific interactions without over-indexing on the interaction effects observed in the current data, which would require handling more fixed-effects parameters than we have data points. Note that in this formula syntax the nested term (1 | schoolid / classid) expands to (1 | schoolid) + (1 | schoolid:classid), which is why the summaries below report both 1|schoolid_sigma and 1|schoolid:classid_sigma.

model = bmb.Model(f"mathgain ~ mathprep + (1 | schoolid / classid)", df)
idata = model.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True})

The model specification here is deliberately minimalist; we want to observe how much of the variation in the outcome can be accounted for solely by adding extensive controls for interactions of group-level effects.

model.graph()

We can see the derived sigma parameters here, which can be understood as partialling out the variance of the outcome into components due to the group-level effects and the unexplained residuals.

az.summary(idata, var_names=['Intercept', '1|schoolid_sigma', '1|schoolid:classid_sigma', 'mathgain_sigma', 'mathprep'])
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept 52.474 3.490 46.010 58.940 0.066 0.047 2779.0 2992.0 1.00
1|schoolid_sigma 8.478 2.092 4.326 12.323 0.108 0.077 449.0 340.0 1.01
1|schoolid:classid_sigma 9.970 2.398 5.842 14.819 0.128 0.090 380.0 454.0 1.01
mathgain_sigma 32.107 0.774 30.747 33.604 0.020 0.014 1454.0 2204.0 1.00
mathprep 1.924 1.239 -0.457 4.112 0.024 0.017 2587.0 2896.0 1.00

Note here the relative proportion of the school-specific variance 1|schoolid_sigma to the overall variance of the residuals mathgain_sigma.

Calculating the IntraClass Correlation Coefficient

These models facilitate the calculation of ICC statistics, which measure “explained variance”. The idea is to gauge the proportion of variance ascribed to one set of random effects over and above the total estimated variance in the baseline model, including the residuals mathgain_sigma.
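Concretely, the school-level ICC takes the school variance component as a share of the total:

\[ICC_{school} = \frac{\sigma^2_{schoolid}}{\sigma^2_{schoolid} + \sigma^2_{schoolid:classid} + \sigma^2_{residual}}\]

with analogous ratios built from the other variance components.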

# variance attributable to the school-level effects
a = idata['posterior']['1|schoolid_sigma']**2

# variance attributable to the school- and class-level effects combined
b = (idata['posterior']['1|schoolid:classid_sigma']**2 + idata['posterior']['1|schoolid_sigma']**2)

# total variance, including the residual
c = (idata['posterior']['1|schoolid:classid_sigma']**2 + idata['posterior']['1|schoolid_sigma']**2 + idata['posterior']['mathgain_sigma']**2)

(a / c).mean().item() 
0.06263191225502242
((a + b) / c).mean().item()
0.21170116243604015

We can see here that the interaction terms do seem to account for a goodly portion of the variance in the outcome, and we ought to consider retaining them in our modelling work.

Augmenting the Models

Next we augment our model with more pupil-level control variables, aiming to pin down some aspects of the variation in the outcome.

model_fixed = bmb.Model(f"mathgain ~ mathkind + sex + minority + ses + mathprep + (1 | schoolid / classid)", df)
idata_fixed = model_fixed.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True})
az.summary(idata_fixed, var_names=['Intercept', 
'mathkind', 'sex', 'minority', 'ses', 'mathprep',
'1|schoolid_sigma', '1|schoolid:classid_sigma', 'mathgain_sigma'])
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept 280.209 11.612 257.765 301.102 0.179 0.127 4194.0 2844.0 1.0
mathkind -0.469 0.023 -0.513 -0.428 0.000 0.000 4485.0 2834.0 1.0
sex -1.214 1.648 -4.183 2.063 0.023 0.024 5146.0 2608.0 1.0
minority -8.237 2.346 -12.488 -3.747 0.038 0.028 3757.0 3090.0 1.0
ses 5.383 1.256 2.987 7.679 0.019 0.014 4253.0 2797.0 1.0
mathprep 0.770 1.075 -1.294 2.740 0.018 0.014 3482.0 2755.0 1.0
1|schoolid_sigma 8.738 1.667 5.765 11.758 0.069 0.049 750.0 908.0 1.0
1|schoolid:classid_sigma 9.046 1.781 5.623 12.248 0.071 0.050 643.0 953.0 1.0
mathgain_sigma 27.162 0.643 25.883 28.309 0.013 0.009 2490.0 2619.0 1.0

Now we add further class-level controls.

model_fixed_1 = bmb.Model(f"mathgain ~ mathkind + sex + minority + ses + yearstea + mathknow + mathprep + (1 | schoolid / classid)", df.dropna())
idata_fixed_1 = model_fixed_1.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True})
az.summary(idata_fixed_1, var_names=['Intercept', 
'mathkind', 'sex', 'minority', 'ses', 'yearstea', 'mathknow', 'mathprep','1|schoolid_sigma', '1|schoolid:classid_sigma', 'mathgain_sigma'])
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept 281.926 11.914 258.693 303.478 0.177 0.125 4527.0 2932.0 1.0
mathkind -0.475 0.023 -0.519 -0.433 0.000 0.000 4377.0 2633.0 1.0
sex -1.316 1.741 -4.722 1.775 0.022 0.023 6031.0 2865.0 1.0
minority -7.837 2.485 -12.532 -3.268 0.036 0.026 4673.0 3291.0 1.0
ses 5.408 1.330 3.035 8.028 0.019 0.013 5026.0 2869.0 1.0
yearstea 0.041 0.119 -0.168 0.268 0.002 0.002 3943.0 3049.0 1.0
mathknow 1.902 1.162 -0.295 4.094 0.020 0.015 3194.0 2785.0 1.0
mathprep 1.090 1.165 -1.087 3.369 0.020 0.016 3287.0 2343.0 1.0
1|schoolid_sigma 8.687 1.705 5.542 11.875 0.067 0.047 687.0 721.0 1.0
1|schoolid:classid_sigma 9.183 1.891 5.590 12.651 0.079 0.056 615.0 822.0 1.0
mathgain_sigma 26.795 0.673 25.558 28.086 0.014 0.010 2450.0 2350.0 1.0

We now make use of Bambi’s model interpretation module to plot the marginal effect on the outcome due to changes in treatment intensity.

fig, axs = plt.subplots(3, 1, figsize=(10, 15), 
dpi=120, sharey=True, sharex=True)
axs = axs.flatten()
bmb.interpret.plot_predictions(model, idata, "mathprep", ax=axs[0]);
bmb.interpret.plot_predictions(model_fixed, idata_fixed, "mathprep", ax=axs[1]);
bmb.interpret.plot_predictions(model_fixed_1, idata_fixed_1, "mathprep", ax=axs[2]);
axs[0].set_title("Baseline Interaction model")
axs[1].set_title("Class Level Controls \n and Interactions model")
axs[2].set_title("Neighbourhood and  Class Level Controls \n and Interactions model");

Across all the different model specifications we see modest effects of treatment with very wide uncertainty. So far, so what?! You might be sceptical that teacher training has any real discernible impact on child outcomes. Maybe you believe other interventions are more important to fund?

These kinds of questions determine policy. Misguided policy interventions in childhood education can have radical consequences for the children involved. It is therefore vital that we have robust and justifiable approaches to the analysis of these policy questions in the face of group-level confounding.

Two Way Fixed Effects and Temporal Confounding

Difference in Differences designs are the overworked donkeys of social science. Many, many studies stand or fall by the assumptions baked into DiD designs. There are at least two aspects to these assumptions: (i) the substantive commitments about the data generating process, and (ii) the appropriateness of the functional form used to model (i). We will look first at a case where all the assumptions can be met, and then examine how things break down.
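In its simplest form, the TWFE specification underlying these designs is

\[y_{it} = \alpha_i + \gamma_t + \beta D_{it} + \epsilon_{it}\]

where \(\alpha_i\) are unit fixed effects, \(\gamma_t\) are time fixed effects and \(D_{it}\) is the treatment indicator. The causal reading of \(\beta\) leans on the parallel trends assumption: absent treatment, the treated and untreated units would have followed the same time path.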

Event Studies and Change in Time

We take this panel data set from the pyfixest package.

url = "https://raw.githubusercontent.com/py-econometrics/pyfixest/master/pyfixest/did/data/df_het.csv"
df_het = pd.read_csv(url)
df_het.head()
unit state group unit_fe g year year_fe treat rel_year rel_year_binned error te te_dynamic dep_var
0 1 33 Group 2 7.043016 2010 1990 0.066159 False -20.0 -6 -0.086466 0 0.0 7.022709
1 1 33 Group 2 7.043016 2010 1991 -0.030980 False -19.0 -6 0.766593 0 0.0 7.778628
2 1 33 Group 2 7.043016 2010 1992 -0.119607 False -18.0 -6 1.512968 0 0.0 8.436377
3 1 33 Group 2 7.043016 2010 1993 0.126321 False -17.0 -6 0.021870 0 0.0 7.191207
4 1 33 Group 2 7.043016 2010 1994 -0.106921 False -16.0 -6 -0.017603 0 0.0 6.918492

Panel data of this kind is difficult to envisage unless visualised.

fig, axs = plt.subplots(2, 1, figsize=(10, 12))
axs = axs.flatten()
for u in df_het['unit'].unique():
    temp = df_het[df_het['unit']==u]
    axs[0].plot(temp['year'], temp['dep_var'], color='grey', alpha=0.01)
    axs[1].plot(temp['year'], temp['dep_var'], color='grey', alpha=0.01)
df_het.groupby(['year', 'state'])[['dep_var']].mean().reset_index().pivot(index='year', columns='state', values='dep_var').plot(ax=axs[0], legend=False, color='blue', 
alpha=0.4)

df_het.groupby(['year', 'g'])[['dep_var']].mean().reset_index().pivot(index='year', columns='g', values='dep_var').plot(ax=axs[1], legend=False)

axs[0].set_title("Difference in Differences \n State Mean Change an Individual Trajectories")
axs[1].set_title("Difference in Differences \n Mean Change an Individual Trajectories");

Note how the blue line represents a cohort that never undergoes treatment and is maintained as a coherent control group throughout the sequence, even though we have two other cohorts.

TWFE in pyfixest

A natural way to interrogate these kinds of questions is to wonder how the treatment effect evolves over time. Is it initially impactful with a quick plateau, or does it show a slowly building pattern of consistent growth?

fit_twfe_event = pf.feols(
    "dep_var ~ i(rel_year, ref=-1.0) | state + year ",
    df_het,
    vcov={"CRV1": "state"},
)

fit_twfe_event.tidy()
Estimate Std. Error t value Pr(>|t|) 2.5% 97.5%
Coefficient
C(rel_year, contr.treatment(base=-1.0))[T.-20.0] -0.099445 0.081699 -1.217202 0.230842 -0.264697 0.065808
C(rel_year, contr.treatment(base=-1.0))[T.-19.0] 0.000624 0.083213 0.007493 0.994060 -0.167690 0.168937
C(rel_year, contr.treatment(base=-1.0))[T.-18.0] 0.004125 0.089719 0.045974 0.963565 -0.177349 0.185599
C(rel_year, contr.treatment(base=-1.0))[T.-17.0] 0.021899 0.085686 0.255573 0.799624 -0.151418 0.195216
C(rel_year, contr.treatment(base=-1.0))[T.-16.0] -0.036933 0.096921 -0.381067 0.705221 -0.232975 0.159108
C(rel_year, contr.treatment(base=-1.0))[T.-15.0] 0.069578 0.081289 0.855936 0.397262 -0.094844 0.234000
C(rel_year, contr.treatment(base=-1.0))[T.-14.0] 0.037734 0.086618 0.435641 0.665499 -0.137467 0.212936
C(rel_year, contr.treatment(base=-1.0))[T.-13.0] 0.061779 0.083362 0.741098 0.463073 -0.106836 0.230394
C(rel_year, contr.treatment(base=-1.0))[T.-12.0] 0.089913 0.084900 1.059044 0.296095 -0.081814 0.261639
C(rel_year, contr.treatment(base=-1.0))[T.-11.0] 0.000982 0.079104 0.012417 0.990156 -0.159021 0.160986
C(rel_year, contr.treatment(base=-1.0))[T.-10.0] -0.113033 0.064922 -1.741047 0.089560 -0.244351 0.018285
C(rel_year, contr.treatment(base=-1.0))[T.-9.0] -0.069225 0.057046 -1.213481 0.232244 -0.184612 0.046162
C(rel_year, contr.treatment(base=-1.0))[T.-8.0] -0.061290 0.060362 -1.015370 0.316188 -0.183383 0.060804
C(rel_year, contr.treatment(base=-1.0))[T.-7.0] -0.002022 0.064602 -0.031306 0.975185 -0.132693 0.128648
C(rel_year, contr.treatment(base=-1.0))[T.-6.0] -0.055810 0.064674 -0.862938 0.393447 -0.186626 0.075006
C(rel_year, contr.treatment(base=-1.0))[T.-5.0] -0.065009 0.064810 -1.003066 0.322012 -0.196099 0.066082
C(rel_year, contr.treatment(base=-1.0))[T.-4.0] -0.009850 0.053098 -0.185505 0.853794 -0.117251 0.097551
C(rel_year, contr.treatment(base=-1.0))[T.-3.0] 0.046338 0.062950 0.736104 0.466072 -0.080991 0.173667
C(rel_year, contr.treatment(base=-1.0))[T.-2.0] 0.015947 0.065442 0.243687 0.808751 -0.116421 0.148316
C(rel_year, contr.treatment(base=-1.0))[T.0.0] 1.406404 0.055641 25.276422 0.000000 1.293860 1.518949
C(rel_year, contr.treatment(base=-1.0))[T.1.0] 1.660552 0.065033 25.533859 0.000000 1.529010 1.792094
C(rel_year, contr.treatment(base=-1.0))[T.2.0] 1.728790 0.056793 30.440100 0.000000 1.613915 1.843665
C(rel_year, contr.treatment(base=-1.0))[T.3.0] 1.854839 0.058315 31.807288 0.000000 1.736886 1.972792
C(rel_year, contr.treatment(base=-1.0))[T.4.0] 1.958676 0.071144 27.531232 0.000000 1.814774 2.102578
C(rel_year, contr.treatment(base=-1.0))[T.5.0] 2.082161 0.063855 32.607651 0.000000 1.953002 2.211320
C(rel_year, contr.treatment(base=-1.0))[T.6.0] 2.191062 0.068510 31.981462 0.000000 2.052487 2.329637
C(rel_year, contr.treatment(base=-1.0))[T.7.0] 2.279073 0.075014 30.381921 0.000000 2.127342 2.430803
C(rel_year, contr.treatment(base=-1.0))[T.8.0] 2.364593 0.058598 40.352477 0.000000 2.246067 2.483120
C(rel_year, contr.treatment(base=-1.0))[T.9.0] 2.372163 0.056560 41.940696 0.000000 2.257759 2.486566
C(rel_year, contr.treatment(base=-1.0))[T.10.0] 2.649271 0.056177 47.158942 0.000000 2.535641 2.762901
C(rel_year, contr.treatment(base=-1.0))[T.11.0] 2.753591 0.075526 36.458951 0.000000 2.600826 2.906356
C(rel_year, contr.treatment(base=-1.0))[T.12.0] 2.813935 0.079320 35.475900 0.000000 2.653496 2.974375
C(rel_year, contr.treatment(base=-1.0))[T.13.0] 2.756070 0.078368 35.168251 0.000000 2.597555 2.914584
C(rel_year, contr.treatment(base=-1.0))[T.14.0] 2.863427 0.098389 29.103072 0.000000 2.664416 3.062438
C(rel_year, contr.treatment(base=-1.0))[T.15.0] 2.986652 0.093309 32.008066 0.000000 2.797916 3.175388
C(rel_year, contr.treatment(base=-1.0))[T.16.0] 2.963032 0.085427 34.684870 0.000000 2.790239 3.135825
C(rel_year, contr.treatment(base=-1.0))[T.17.0] 2.972596 0.092390 32.174553 0.000000 2.785721 3.159472
C(rel_year, contr.treatment(base=-1.0))[T.18.0] 2.935051 0.094126 31.182203 0.000000 2.744664 3.125439
C(rel_year, contr.treatment(base=-1.0))[T.19.0] 2.918707 0.084193 34.667036 0.000000 2.748411 3.089002
C(rel_year, contr.treatment(base=-1.0))[T.20.0] 2.979701 0.087777 33.946394 0.000000 2.802156 3.157246
C(rel_year, contr.treatment(base=-1.0))[T.inf] 0.128053 0.078513 1.630976 0.110946 -0.030755 0.286861

The model specification defines indicator variables for the relative years before and after the final pre-treatment year, and then includes the fixed effects for the state and year indicators. The coefficient values ascribed to the relative-year indicators are used to plot the event-study trajectories. This is a two-way fixed effects estimation routine, where the fixed effects for the state and year indicators absorb the variance due to those groupings.
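Roughly, the specification is

\[y_{it} = \alpha_{s(i)} + \gamma_t + \sum_{k \neq -1} \beta_k \mathbb{1}[rel\_year_{it} = k] + \epsilon_{it}\]

where \(\alpha_{s(i)}\) and \(\gamma_t\) are the state and year fixed effects, and the \(\beta_k\) trace the treatment path relative to the omitted year \(k = -1\).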

figsize = [1200, 900]
fit_twfe_event.iplot(
    coord_flip=False,
    title="TWFE-Estimator",
    figsize=figsize,
    xintercept=18.5,
    yintercept=0,
).show()

We can also aim to marginalise over the details of the state trajectories by running a similar estimation routine on the individual units and their treatment indicator.

fit_twfe = pf.feols(
    "dep_var ~ i(treat) | unit + year",
    df_het,
    vcov={"CRV1": "state"},
)

fit_twfe.tidy()
Estimate Std. Error t value Pr(>|t|) 2.5% 97.5%
Coefficient
C(treat)[T.True] 1.98254 0.019331 102.55618 0.0 1.943439 2.021642

De-meaning and TWFEs

We’ve seen above that the fixed-effect estimators in these DiD designs involve a lot of indicator variables. These are largely not the focus of the question at hand but are used exclusively to absorb noise that takes away from our understanding of the treatment effect. We can achieve similar results with fewer parameters if we “de-mean” the focus variables by the group averages of the control factors of state and year (or unit). This operation, which makes for more efficient TWFE estimation routines, is provably a variety of Mundlak regression.
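For a balanced panel like this one, sequentially demeaning by each grouping reproduces the familiar two-way within transformation

\[\ddot{x}_{it} = x_{it} - \bar{x}_{i\cdot} - \bar{x}_{\cdot t} + \bar{x}\]

and Mundlak’s classic result says that regressing the outcome on the raw variables plus their group means recovers the same coefficient as the within regression, which is why the approaches below agree.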

def demean(df, col_to_demean, group):
    return df.assign(**{col_to_demean: (df[col_to_demean]
                                        - df.groupby(group)[col_to_demean].transform("mean")
                                        )})


def apply_demeaning(df_het, by=['state', 'year'], event=True):
    if event: 
        # float dtype dummies so the demeaning arithmetic below is well-defined
        d = pd.get_dummies(df_het['rel_year'], dtype=float).drop(-1, axis=1) 
        d.columns = ['rel_year_' + str(c).replace('-', 'minus_') for c in d.columns]
    else:
        # copy to avoid mutating a view of the original frame
        d = df_het[['treat']].copy()
    d[by[0]] = df_het[by[0]]
    d[by[1]] = df_het[by[1]]
    for col in d.columns: 
        if col in by:
            pass
        else: 
            for c in by:
                d = demean(d, col, c)
    d = d.drop(by, axis=1)
    d['dep_var'] = df_het['dep_var']
    return d

d_event = apply_demeaning(df_het, by=['state', 'year'], event=True)
d = apply_demeaning(df_het, by=['unit', 'year'], event=False)

d_event.head()
rel_year_minus_20.0 rel_year_minus_19.0 rel_year_minus_18.0 rel_year_minus_17.0 rel_year_minus_16.0 rel_year_minus_15.0 rel_year_minus_14.0 rel_year_minus_13.0 rel_year_minus_12.0 rel_year_minus_11.0 ... rel_year_13.0 rel_year_14.0 rel_year_15.0 rel_year_16.0 rel_year_17.0 rel_year_18.0 rel_year_19.0 rel_year_20.0 rel_year_inf dep_var
0 0.651694 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 ... 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 -0.28125 7.022709
1 -0.002973 0.651694 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 ... 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 -0.28125 7.778628
2 -0.002973 -0.002973 0.651694 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 ... 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 -0.28125 8.436377
3 -0.002973 -0.002973 -0.002973 0.651694 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 ... 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 -0.28125 7.191207
4 -0.002973 -0.002973 -0.002973 -0.002973 0.651694 -0.002973 -0.002973 -0.002973 -0.002973 -0.002973 ... 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 0.000734 -0.28125 6.918492

5 rows × 42 columns

We now have a data set with 42 columns focused on the treatment structures but that implicitly controls for the variation due to state and time. We’ll see below that this representation of the data will correctly estimate the treatment effects.

Event Study and De-Meaning

Now we’ll use the de-meaned data structure above to estimate an event study using Bambi.

x_cols = ' + '.join([c for c in d_event.columns if c != 'dep_var'])
model_twfe_event = bmb.Model(f"dep_var ~ + {x_cols}", d_event)
idata_twfe_event = model_twfe_event.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True},)
model_twfe_event
       Formula: dep_var ~ + rel_year_minus_20.0 + rel_year_minus_19.0 + rel_year_minus_18.0 + rel_year_minus_17.0 + rel_year_minus_16.0 + rel_year_minus_15.0 + rel_year_minus_14.0 + rel_year_minus_13.0 + rel_year_minus_12.0 + rel_year_minus_11.0 + rel_year_minus_10.0 + rel_year_minus_9.0 + rel_year_minus_8.0 + rel_year_minus_7.0 + rel_year_minus_6.0 + rel_year_minus_5.0 + rel_year_minus_4.0 + rel_year_minus_3.0 + rel_year_minus_2.0 + rel_year_0.0 + rel_year_1.0 + rel_year_2.0 + rel_year_3.0 + rel_year_4.0 + rel_year_5.0 + rel_year_6.0 + rel_year_7.0 + rel_year_8.0 + rel_year_9.0 + rel_year_10.0 + rel_year_11.0 + rel_year_12.0 + rel_year_13.0 + rel_year_14.0 + rel_year_15.0 + rel_year_16.0 + rel_year_17.0 + rel_year_18.0 + rel_year_19.0 + rel_year_20.0 + rel_year_inf
        Family: gaussian
          Link: mu = identity
  Observations: 46500
        Priors: 
    target = mu
        Common-level effects
            Intercept ~ Normal(mu: 4.7062, sigma: 7.1549)
            rel_year_minus_20.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_19.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_18.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_17.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_16.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_15.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_14.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_13.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_12.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_11.0 ~ Normal(mu: 0.0, sigma: 83.8218)
            rel_year_minus_10.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_minus_9.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_minus_8.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_minus_7.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_minus_6.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_minus_5.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_minus_4.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_minus_3.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_minus_2.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_0.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_1.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_2.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_3.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_4.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_5.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_6.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_7.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_8.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_9.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_10.0 ~ Normal(mu: 0.0, sigma: 60.2319)
            rel_year_11.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_12.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_13.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_14.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_15.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_16.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_17.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_18.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_19.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_20.0 ~ Normal(mu: 0.0, sigma: 86.6805)
            rel_year_inf ~ Normal(mu: 0.0, sigma: 15.2312)
        
        Auxiliary parameters
            sigma ~ HalfStudentT(nu: 4.0, sigma: 2.862)
------
* To see a plot of the priors call the .plot_priors() method.
* To see a summary or plot of the posterior pass the object returned by .fit() to az.summary() or az.plot_trace()

We can then plot the event-study and observe a similar pattern to the one observed with pyfixest.

def plot_event_study(idata, ax, color='blue', model='demeaned'):
    summary_df = az.summary(idata)
    cols = [i for i in summary_df.index if 'rel' in i]
    summary_df = summary_df[summary_df.index.isin(cols)]
    x = range(len(summary_df))
    ax.scatter(x, summary_df['mean'], label=model, color=color)
    ax.plot([x, x], [summary_df['hdi_3%'],summary_df['hdi_97%']],   color=color)
    ax.set_title("Event Study", fontsize=20)
    return ax

fig, ax = plt.subplots(figsize=(10, 7))
plot_event_study(idata_twfe_event, ax)
ax.legend();

Similarly, we can de-mean the simple treatment indicator using the group means and marginalise over time periods to find a single treatment effect estimate.

model_twfe_trt_demean = bmb.Model(f"dep_var ~ treat", d)
idata_twfe_trt_demean = model_twfe_trt_demean.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True},)
az.summary(idata_twfe_trt_demean)
mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_bulk ess_tail r_hat
Intercept 4.706 0.013 4.681 4.730 0.000 0.000 3370.0 2753.0 1.0
treat 1.983 0.048 1.889 2.066 0.001 0.001 3129.0 3090.0 1.0
dep_var_sigma 2.811 0.009 2.793 2.828 0.000 0.000 5607.0 3034.0 1.0

This again accords with the reported values from pyfixest, and it is equivalent to using a Mundlak device, as we can see below:

TWFE by Mundlak Device

df_het['unit_mean'] = df_het.groupby('unit')['treat'].transform('mean')
df_het['time_mean'] = df_het.groupby('year')['treat'].transform('mean')

model_twfe_trt = bmb.Model(f"dep_var ~ treat", df_het)
idata_twfe_trt = model_twfe_trt.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True},)

model_twfe_trt_mundlak = bmb.Model(f"dep_var ~ treat + unit_mean + time_mean", df_het)
idata_twfe_trt_mundlak = model_twfe_trt_mundlak.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True},)
az.plot_forest([idata_twfe_trt_demean, idata_twfe_trt_mundlak, idata_twfe_trt], combined=True, var_names=['treat'], model_names=['De-meaned', 'Mundlak', 'Simple']);

We’ve seen here how the de-meaned TWFE estimator and the Mundlak specification result in identical estimates, and how both differ from the naive estimate that fails to control for group-level confounds.

Functional Form and Saturated Regression

We’ve seen how the vanilla TWFE estimator can successfully recover the treatment effects and facilitate event studies. However, the details of the estimation matter because this functional form is not always robust. Here we’ll see other options that can recover substantially the same inferences and may prove more robust, as we’ll see below. The key to each is to articulate enough structural features to allow the model to modify effects based on the suspected group-level confounds.

df_het['state_mean'] = df_het.groupby('state')['treat'].transform('mean')
df_het['time_mean'] = df_het.groupby('year')['treat'].transform('mean')
df_het['cohort_mean'] = df_het.groupby('group')['treat'].transform('mean')

model_twfe_event_1 = bmb.Model(f"dep_var ~ 1 + C(year) + state_mean + C(rel_year, Treatment(reference=-1)) ", df_het)
idata_twfe_event_1 = model_twfe_event_1.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True})


formula = """ dep_var ~ 1 + time_mean:state_mean + C(rel_year, Treatment(reference=-1))"""
twfe_model_ols = smf.ols(formula, data=df_het).fit()
twfe_model_ols.summary()
param_est = pd.DataFrame(twfe_model_ols.params, columns=['estimate']).iloc[1:-1]
param_est['index_number'] = list(range(len(param_est)))
temp = (param_est.reset_index()
)
param_est = temp[(~temp['index'].str.contains(':')) & (temp['index'].str.contains('rel'))]
param_est.reset_index(inplace=True)


model_twfe_event_2 = bmb.Model(f"dep_var ~ (1 | year) + state_mean + C(rel_year, Treatment(reference=-1)) ", df_het)
idata_twfe_event_2 = model_twfe_event_2.fit( inference_method="nuts_numpyro",
    idata_kwargs={"log_likelihood": True})

Having estimated the various alternative model specifications, we compare each against our baseline de-meaned event study.

fig, axs = plt.subplots(2, 2, figsize=(20, 10))
axs = axs.flatten()
plot_event_study(idata_twfe_event, axs[0], model='Manual DeMeaned')
plot_event_study(idata_twfe_event, axs[1], model='Manual DeMeaned')
plot_event_study(idata_twfe_event, axs[2], model='Manual DeMeaned')
plot_event_study(idata_twfe_event, axs[3], model='Manual DeMeaned')
plot_event_study(idata_twfe_event_1, axs[0], color='green', model='Fixed Effects Saturated Bayes')
plot_event_study(idata_twfe_event_2, axs[1], color='purple', model='Hierarchical Effects Saturated Bayes')
axs[2].scatter(param_est['index'], param_est['estimate'], color='red', label='Mundlak Interaction Features OLS')
tidy = fit_twfe_event.tidy()
xs = range(len(tidy))
tidy.reset_index(inplace=True)
axs[3].scatter(xs, tidy['Estimate'], color='orange', label='pyfixest TWFE')
axs[3].plot([xs, xs], [tidy['2.5%'],tidy['97.5%']], color='orange')
axs[2].set_xticks([])
axs[0].set_title("dep_var ~ 1 + C(year) + state_mean + C(rel_year, Treatment(reference=-1))")
axs[1].set_title("dep_var ~ (1 | year) + state_mean + C(rel_year, Treatment(reference=-1))")
axs[2].set_title("dep_var ~ 1 + time_mean:state_mean + C(rel_year, Treatment(reference=-1))")
axs[3].set_title("dep_var ~ i(rel_year, ref=-1.0) | state + year")
axs[0].legend()
axs[1].legend()
axs[2].legend()
axs[3].legend();


This suggests that there is a variety of functional forms, even just among regression specifications, that seek to control for different types of group-level confounding. In this example data most of the functional forms that control for time and state-level effects seem to converge in their answers. We will now switch to an example where the vanilla TWFE breaks down.

Issues with TWFE and Richly Parameterised Linear Models

We draw on an example from Pedro Sant’Anna, which demonstrates that the vanilla TWFE estimator breaks down under various conditions. These conditions are often related to a staggered roll-out of the treatment, which induces dynamic changes in the treatment group over time. Appropriate inference needs to carefully control for the interaction effects due to staggered treatment.

Let’s generate some data.

true_mu = 1

def make_data(nobs = 1000, nstates = 40):
    ids = list(range(nobs))
    states = np.random.choice(range(nstates), size=nobs, replace=True)
    unit = pd.DataFrame({'unit': ids, 
                        'state': states, 
                        'unit_fe': np.random.normal(states/5, 1, size=nobs),
                        'mu': true_mu})
    
    year = pd.DataFrame({'year': pd.date_range('1980-01-01', '2010-01-01', freq='YE'), 
    'year_fe': np.random.normal(0, 1, 30) })
    year['year'] = year['year'].dt.year

    treat_taus = pd.DataFrame({'state': np.random.choice(range(nstates), size=nstates, replace=False),
    'cohort_year': np.sort([1986, 1992, 1998, 2004]*10)
    })

    cross_join = pd.DataFrame([row for row in product(range(nobs), year['year'].unique())], columns =['unit', 'year'])
    cross_join = cross_join.merge(unit, how='left', left_on='unit', 
    right_on='unit')
    cross_join = cross_join.merge(year, how='left', left_on='year', 
    right_on='year')
    cross_join = cross_join.merge(treat_taus, how='left', left_on='state', right_on='state')
    cross_join = cross_join.assign(
        error = np.random.normal(0, 1, len(cross_join)),
        treat = lambda x: np.where(x['year'] >= x['cohort_year'], 1, 0)
    )
    cross_join = cross_join.assign(tau = np.where(cross_join['treat'] == 1, cross_join['mu'], 0), 
    ).assign(year_fe = lambda x: x['year_fe'] + 0.1*(x['year']-x['cohort_year']))

    cross_join['tau_cum'] = cross_join.groupby('unit')['tau'].transform(np.cumsum)
    cross_join = cross_join.assign(dep_var = lambda x: 2010-x['cohort_year'] + 
    x['unit_fe'] + x['year_fe'] + x['tau_cum'] + x['error'])
    cross_join['rel_year'] =  cross_join['year'] - cross_join['cohort_year']

    
    return cross_join

sim_df = make_data(500, 40)
sim_df.head()
   unit  year  state   unit_fe  mu   year_fe  cohort_year     error  treat  tau  tau_cum    dep_var  rel_year
0     0  1980     12  2.770007   1 -1.443538         1998 -0.883541      0    0        0  12.442927       -18
1     0  1981     12  2.770007   1 -3.140326         1998  1.044140      0    0        0  12.673821       -17
2     0  1982     12  2.770007   1 -2.430673         1998  0.276021      0    0        0  12.615355       -16
3     0  1983     12  2.770007   1 -4.253994         1998  0.181049      0    0        0  10.697061       -15
4     0  1984     12  2.770007   1 -3.347429         1998 -2.104055      0    0        0   9.318523       -14
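Before plotting, a quick sanity check on the staggered design, as a minimal sketch using the columns created above: each cohort should contain ten states, and treatment should switch on exactly in the cohort year.

# Ten states are assigned to each of the four cohorts
print(sim_df.groupby('cohort_year')['state'].nunique())
# The first treated year in each cohort coincides with the cohort year
print(sim_df[sim_df['treat'] == 1].groupby('cohort_year')['year'].min())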

We can now plot the staggered nature of the imagined treatment regime.

fig, ax = plt.subplots(figsize=(10, 6))

# Individual unit trajectories in grey
for unit in sim_df['unit'].unique()[0:100]:
    temp = sim_df[sim_df['unit'] == unit]
    ax.plot(temp['year'], temp['dep_var'], alpha=0.1, color='grey')

# Cohort-level means, with vertical lines marking each cohort's treatment year
sim_df.groupby(['cohort_year', 'year'])[['dep_var']].mean().reset_index().pivot(
    index='year', columns='cohort_year', values='dep_var').plot(ax=ax)
ax.axvline(1986, color='blue')
ax.axvline(1992, color='orange')
ax.axvline(1998, color='green')
ax.axvline(2004, color='red')
ax.set_title("Simulated Cohorts Homogeneous Treatment Effects \n All Eventually Treated", fontsize=20)
ax.legend()

This data will present problems for the vanilla TWFE estimator, in part because every cohort eventually receives the treatment, so there are periods in the data when no group remains in the "control".
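We can make this concrete by checking the share of treated units in each year; a minimal sketch using the simulated panel:

# Share of treated unit-years by calendar year: once this reaches 1.0
# there is no untreated group left to serve as a control
treated_share = sim_df.groupby('year')['treat'].mean()
print(treated_share.loc[[1985, 1986, 1992, 1998, 2004, 2009]])

After 2004 every unit is treated. With that in mind, let's see how this plays out with the de-meaning TWFE strategy.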

from pyfixest.did.estimation import lpdid

# Vanilla TWFE event study with state and year fixed effects,
# clustering standard errors by state
fit_twfe = pf.feols(
    "dep_var ~ i(rel_year, ref=-1.0) | state + year",
    sim_df,
    vcov={"CRV1": "state"},
)


figsize = [1200, 400]
fit_twfe.iplot(
    coord_flip=False,
    title="TWFE-Estimator",
    figsize=figsize,
    xintercept=18.5,
    yintercept=0,
).show()
UserWarning: The following variables are collinear: ['C(rel_year, contr.treatment(base=-1.0))[T.18]', 'C(rel_year, contr.treatment(base=-1.0))[T.19]', 'C(rel_year, contr.treatment(base=-1.0))[T.20]', 'C(rel_year, contr.treatment(base=-1.0))[T.21]', 'C(rel_year, contr.treatment(base=-1.0))[T.22]', 'C(rel_year, contr.treatment(base=-1.0))[T.23]'].
The variables are dropped from the model.

This is not the expected pattern. For contrast, consider an alternative estimator.

fit_lpdid = lpdid(
    data=sim_df,
    yname="dep_var",
    gname="cohort_year",
    tname="year",
    idname="unit",
    vcov={"CRV1": "state"},
    pre_window=-17,
    post_window=17,
    att=False,
)

fit_lpdid.iplot(
    coord_flip=False,
    title="Local-Projections-Estimator",
    figsize=figsize,
    yintercept=0,
    xintercept=18.5,
).show()

The initial TWFE estimate is frankly bizarre and utterly skewed. Something dreadful has gone wrong under the hood. For contrast, we've included the Local Projections estimator from pyfixest to show that we can recover the actual treatment effect in this event study with alternative strategies. However, there is more machinery involved in the local-projections estimator. Instead we want to show how to use Mundlak devices to recover more reasonable estimates. No fancy estimators, just more regressions.
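The idea of the Mundlak device is to replace the group fixed effects with group-level means of the treatment indicator as regressors. Schematically, for unit $i$ in state $s$ at time $t$, we fit something like

$$
y_{ist} = \alpha + \gamma_1 \bar{T}_{\cdot t} + \gamma_2 \bar{T}_{s \cdot} + \sum_{k \neq -1} \tau_k \, \mathbb{1}[\text{rel\_year}_{ist} = k] + \varepsilon_{ist}
$$

where $\bar{T}_{\cdot t}$ and $\bar{T}_{s \cdot}$ are the time- and state-level averages of the treatment indicator. We compute these group means directly: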

# Mundlak device: group-level means of the treatment indicator
sim_df['unit_mean'] = sim_df.groupby('unit')['treat'].transform('mean')
sim_df['state_mean'] = sim_df.groupby('state')['treat'].transform('mean')
sim_df['cohort_mean'] = sim_df.groupby('cohort_year')['treat'].transform('mean')
sim_df['time_mean'] = sim_df.groupby('year')['treat'].transform('mean')


# Additive Mundlak: time and state means enter separately, plus cohort dummies
model_twfe = bmb.Model("dep_var ~ 1 + time_mean + state_mean + C(cohort_year) + C(rel_year, Treatment(reference=-1))", sim_df)
idata_twfe = model_twfe.fit(inference_method="nuts_numpyro",
                            idata_kwargs={"log_likelihood": True})

# Interacting the time and state means
model_twfe1 = bmb.Model("dep_var ~ 1 + time_mean * state_mean + C(cohort_year) + C(rel_year, Treatment(reference=-1))", sim_df)
idata_twfe1 = model_twfe1.fit(inference_method="nuts_numpyro",
                              idata_kwargs={"log_likelihood": True})

# Cohort and state mean interaction only
model_twfe2 = bmb.Model("dep_var ~ 1 + cohort_mean:state_mean + C(rel_year, Treatment(reference=-1))", sim_df)
idata_twfe2 = model_twfe2.fit(inference_method="nuts_numpyro",
                              idata_kwargs={"log_likelihood": True})

# Hierarchical year intercepts alongside the state mean
model_twfe3 = bmb.Model("dep_var ~ (1 | year) + state_mean + C(rel_year, Treatment(reference=-1))", sim_df)
idata_twfe3 = model_twfe3.fit(inference_method="nuts_numpyro",
                              idata_kwargs={"log_likelihood": True})

These latter models will recover the appropriate treatment effects with slight variations due to the functional form.

fig, axs = plt.subplots(4, 1, figsize=(10, 15), sharey=True)
axs = axs.flatten()
plot_event_study(idata_twfe, ax=axs[0], model='Additive Mundlak')
plot_event_study(idata_twfe1, ax=axs[1], color='red', model='Mundlak State & Time Interactions')
plot_event_study(idata_twfe2, ax=axs[2], color='green', model='Mundlak Cohort & State Interactions')
plot_event_study(idata_twfe3, ax=axs[3], color='purple', model='Mundlak State & Hierarchical Year')
axs[0].set_title("dep_var ~ 1 + time_mean + state_mean + C(cohort_year) + C(rel_year, Treatment(reference=-1))")
axs[1].set_title("dep_var ~ 1 + time_mean * state_mean + C(cohort_year) + C(rel_year, Treatment(reference=-1))")
axs[2].set_title("dep_var ~ 1 + cohort_mean:state_mean + C(rel_year, Treatment(reference=-1))")
axs[3].set_title("dep_var ~ (1 | year) + state_mean + C(rel_year, Treatment(reference=-1))")
for ax in axs:
    ax.legend()

Note how the naive Mundlak approach replicates the odd behaviour we saw in the TWFE estimation routine above. Adding additional interactions and controlling for the staggered launch dates helps isolate the real pattern in the data.

az.compare({'fe_mundlak_naive': idata_twfe, 
            'mundlak_state_time_interactions_cohort': idata_twfe1, 
            'mundlak_cohort_state_interactions': idata_twfe2, 
            'mundlak_state_hierarchical_year': idata_twfe3})
                                        rank      elpd_loo      p_loo    elpd_diff    weight         se        dse  warning  scale
mundlak_state_hierarchical_year            0 -35912.318137  72.047678     0.000000  0.815446  72.024574   0.000000    False    log
mundlak_state_time_interactions_cohort     1 -36426.674710  52.463753   514.356573  0.184555  74.292009  41.143444    False    log
fe_mundlak_naive                           2 -36589.674136  52.184966   677.355999  0.000001  75.288331  43.264840    False    log
mundlak_cohort_state_interactions          3 -38253.480971  46.432289  2341.162834  0.000000  78.061310  49.584640    False    log

Conclusion

The moral of the staggered adoption example is that the vanilla TWFE estimator is not a safe default. When treatment rolls out in staggered cohorts and every group is eventually treated, the naive specification produces badly distorted event-study estimates. We did not need exotic estimators to fix it: saturating the regression with Mundlak-style group means, cohort interactions and hierarchical year effects recovered the true treatment dynamics, and the model comparison above favours the hierarchical specification on predictive grounds.

Citation

BibTeX citation:
@online{forde2024,
  author = {Nathaniel Forde},
  title = {Freedom, {Hierarchies} and {Saturated} {Regression} {(WIP)}},
  date = {2024-07-01},
  langid = {en}
}
For attribution, please cite this work as:
Nathaniel Forde. 2024. “Freedom, Hierarchies and Saturated Regression (WIP).” July 1, 2024.